Results by year

Number of search results per year:

Year  Results
1910 8
1918 1
1920 2
1935 1
1947 1
1949 3
1950 1
1951 1
1952 1
1953 1
1954 2
1955 4
1956 2
1957 3
1958 4
1959 3
1960 2
1961 2
1963 4
1964 3
1965 1
1966 1
1967 2
1968 5
1969 2
1970 3
1971 10
1972 3
1973 2
1974 4
1975 10
1976 14
1977 9
1978 16
1979 6
1980 14
1981 12
1982 9
1983 13
1984 13
1985 34
1986 19
1987 16
1988 25
1989 27
1990 22
1991 24
1992 23
1993 22
1994 30
1995 34
1996 28
1997 23
1998 35
1999 32
2000 24
2001 36
2002 41
2003 45
2004 34
2005 45
2006 66
2007 80
2008 73
2009 65
2010 79
2011 82
2012 98
2013 120
2014 124
2015 112
2016 111
2017 100
2018 139
2019 169
2020 300
2021 572
2022 1450
2023 2990
2024 1525


Search Results

8,492 results

The following term was not found in PubMed: balance-to-unbalance
Page 1
Vicinity Vision Transformer.
Sun W, Qin Z, Deng H, Wang J, Zhang Y, Zhang K, Barnes N, Birchfield S, Kong L, Zhong Y. IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12635-12649. doi: 10.1109/TPAMI.2023.3285569. Epub 2023 Sep 5. PMID: 37310842
Vision transformers have shown great success on numerous computer vision tasks. ...Finally, to validate the proposed methods, we build a linear vision transformer backbone named Vicinity Vision Transformer (VVT). Targeting general vision tasks, we build VVT i …
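The vicinity attention mechanism is specific to the paper, but the linear-attention family that VVT belongs to is easy to illustrate. Below is a minimal, hypothetical sketch of generic kernelized linear attention in PyTorch, showing how re-associating the attention product drops the cost from quadratic to linear in the number of tokens; it is not the paper's actual mechanism.

import torch

def linear_attention(q, k, v, eps=1e-6):
    # Generic kernelized linear attention (illustrative, not VVT's
    # vicinity mechanism): phi(x) = elu(x) + 1 keeps features positive,
    # and computing phi(K)^T V first makes the cost O(N) in the token
    # count N instead of the O(N^2) of softmax attention.
    phi_q = torch.nn.functional.elu(q) + 1.0        # (B, N, D)
    phi_k = torch.nn.functional.elu(k) + 1.0        # (B, N, D)
    kv = torch.einsum("bnd,bne->bde", phi_k, v)     # (B, D, E)
    z = 1.0 / (torch.einsum("bnd,bd->bn", phi_q, phi_k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", phi_q, kv, z)

q = k = v = torch.randn(2, 196, 64)  # e.g. 14x14 patch tokens
out = linear_attention(q, k, v)      # (2, 196, 64)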
Evolutionary Dual-Stream Transformer.
Zhang R, Jiao L, Li L, Liu F, Liu X, Yang S. IEEE Trans Cybern. 2024 Apr;54(4):2166-2178. doi: 10.1109/TCYB.2022.3213537. Epub 2024 Mar 18. PMID: 36279360
Vision transformers (ViTs) are rapidly evolving and are widely used in computer vision. ...The DST model uses a dual-branch structure to fuse convolutional and transformer features. Combining the features learned by the transformer and convolution effectively …
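The evolutionary search behind DST is beyond a short example, but the dual-branch convolution-plus-transformer fusion the abstract describes can be sketched generically. The block below is a hypothetical illustration (the module names and residual-sum fusion are assumptions, not the paper's design): a depthwise-convolution stream captures local structure while a self-attention stream captures global context.

import torch
import torch.nn as nn

class DualStreamBlock(nn.Module):
    # Hypothetical dual-branch block: a convolutional stream for local
    # features and a self-attention stream for global context, fused by
    # residual summation. Not the paper's evolved DST architecture.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),  # depthwise
            nn.Conv2d(dim, dim, 1),                         # pointwise
        )
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        local = self.conv(x)
        tokens = self.norm(x.flatten(2).transpose(1, 2))  # (B, H*W, C)
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return x + local + glob

y = DualStreamBlock(64)(torch.randn(2, 64, 14, 14))  # (2, 64, 14, 14)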
Transformers in medical imaging: A survey.
Shamshad F, Khan S, Zamir SW, Khan MH, Hayat M, Khan FS, Fu H. Med Image Anal. 2023 Aug;88:102802. doi: 10.1016/j.media.2023.102802. Epub 2023 Apr 5. PMID: 37315483. Review.
Specifically, we survey the use of Transformers in medical image segmentation, detection, classification, restoration, synthesis, registration, clinical report generation, and other tasks. ...We hope this survey will ignite further interest in the community and provide res …
Uncover This Tech Term: Transformers.
Gupta A, Rangarajan K. Korean J Radiol. 2024 Jan;25(1):113-115. doi: 10.3348/kjr.2023.0948. PMID: 38184774. Free PMC article. No abstract available.
Attention-Guided Collaborative Counting.
Mo H, Ren W, Zhang X, Yan F, Zhou Z, Cao X, Wu W. IEEE Trans Image Process. 2022;31:6306-6319. doi: 10.1109/TIP.2022.3207584. Epub 2022 Oct 10. PMID: 36178989
The loss functions do not require additional labels and crowd division. In addition, we design two kinds of bidirectional transformers (Bi-Transformers) to decouple the global attention to row attention and column attention. The proposed Bi-Transformers are a …
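Decoupling global attention into row attention and column attention is the axial-attention pattern. The sketch below is a generic illustration of that decomposition, not the paper's Bi-Transformer: attending within each row, then within each column, gives every token a global receptive field at O(HW(H+W)) cost instead of O((HW)^2).

import torch
import torch.nn as nn

class RowColAttention(nn.Module):
    # Generic row/column (axial) attention sketch; not the paper's
    # Bi-Transformer. Rows and columns are treated as independent
    # sequences for two separate attention passes.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                # x: (B, H, W, C)
        b, h, w, c = x.shape
        rows = x.reshape(b * h, w, c)                    # each row
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        cols = x.transpose(1, 2).reshape(b * w, h, c)    # each column
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).transpose(1, 2)  # (B, H, W, C)

y = RowColAttention(64)(torch.randn(2, 16, 16, 64))     # (2, 16, 16, 64)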
A Survey on Vision Transformer.
Han K, Wang Y, Chen H, Chen X, Guo J, Liu Z, Tang Y, Xiao A, Xu C, Xu Y, Yang Z, Zhang Y, Tao D. IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):87-110. doi: 10.1109/TPAMI.2022.3152247. Epub 2022 Dec 5. PMID: 35180075
The main categories we explore include the backbone network, high/mid-level vision, low-level vision, and video processing. We also include efficient transformer methods for pushing transformer into real device-based applications. Furthermore, we also take a brief l …
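For orientation, the backbone category such surveys start from is the standard ViT recipe: patchify the image with a strided convolution, add position embeddings, and run a stack of transformer encoder layers. The following minimal sketch uses arbitrary hyperparameters and is illustrative only.

import torch
import torch.nn as nn

class TinyViT(nn.Module):
    # Minimal ViT-style backbone sketch: strided-conv patch embedding,
    # learned position embeddings, standard transformer encoder.
    def __init__(self, img=224, patch=16, dim=192, depth=4, heads=3):
        super().__init__()
        n = (img // patch) ** 2
        self.patch_embed = nn.Conv2d(3, dim, patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, x):                                 # x: (B, 3, H, W)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)
        return self.encoder(tokens + self.pos)            # (B, N, dim)

feats = TinyViT()(torch.randn(1, 3, 224, 224))  # (1, 196, 192)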
Transformers in medical image segmentation: a narrative review.
Khan RF, Lee BD, Lee MS. Quant Imaging Med Surg. 2023 Dec 1;13(12):8747-8767. doi: 10.21037/qims-23-542. Epub 2023 Oct 7. PMID: 38106306. Free PMC article. Review.
BACKGROUND AND OBJECTIVE: Transformers, which have been widely recognized as state-of-the-art tools in natural language processing (NLP), have also come to be recognized for their value in computer vision tasks. ...Through this survey, we summarize the popular and unconven …
Dynamic Unary Convolution in Transformers.
Duan H, Long Y, Wang S, Zhang H, Willcocks CG, Shao L. IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):12747-12759. doi: 10.1109/TPAMI.2022.3233482. Epub 2023 Oct 3. PMID: 37018310
It is uncertain whether the power of transformer architectures can complement existing convolutional neural networks. ...Both qualitative and quantitative results show our parallel convolutional-transformer approach with dynamic and unary convolution outperforms exi …
Coarse-to-Fine Multi-Scene Pose Regression With Transformers.
Shavit Y, Ferens R, Keller Y. IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14222-14233. doi: 10.1109/TPAMI.2023.3310929. Epub 2023 Nov 3. PMID: 37651496
In this work, we propose to learn multi-scene absolute camera pose regression with Transformers, where encoders are used to aggregate activation maps with self-attention and decoders transform latent features and scenes encoding into pose predictions. This allows our model …
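The encoder/decoder split the abstract describes can be rendered at a high level: an encoder self-attends over flattened activation-map tokens, and a decoder turns learned per-scene queries into pose predictions. The sketch below is a hypothetical rendering of that pattern, not the authors' model; the 7-D output (3-D position plus a 4-D orientation quaternion) and all sizes are assumptions.

import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    # Hypothetical transformer pose-regression sketch: the encoder
    # aggregates image-feature tokens with self-attention; the decoder
    # maps one learned query per scene to a 7-D pose.
    def __init__(self, dim=256, scenes=4, heads=8, depth=2):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(scenes, dim))
        self.tf = nn.Transformer(dim, heads, depth, depth,
                                 batch_first=True)
        self.head = nn.Linear(dim, 7)  # (x, y, z, qw, qx, qy, qz)

    def forward(self, feats):                       # feats: (B, N, dim)
        q = self.queries.unsqueeze(0).expand(feats.size(0), -1, -1)
        return self.head(self.tf(feats, q))         # (B, scenes, 7)

feats = torch.randn(2, 196, 256)  # flattened backbone activation maps
poses = PoseRegressor()(feats)    # (2, 4, 7): one pose per scene query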
A Survey of Visual Transformers.
Liu Y, Zhang Y, Wang Y, Hou F, Yuan J, Tian J, Zhang Y, Shi Z, Fan J, He Z. IEEE Trans Neural Netw Learn Syst. 2023 Mar 30;PP. doi: 10.1109/TNNLS.2022.3227717. Online ahead of print. PMID: 37015131
Transformer, an attention-based encoder-decoder model, has already revolutionized the field of natural language processing (NLP). ...Because of their differences on training settings and dedicated vision tasks, we have also evaluated and compared all these existing visual